On some provably correct cases of variational inference for topic models
Variational inference is a very efficient and popular heuristic used in
various forms in the context of latent variable models. It is closely related
to Expectation Maximization (EM), and is applied when exact EM is
computationally infeasible. Despite its immense popularity, the current
theoretical understanding of the effectiveness of variational-inference-based
algorithms is very limited. In this work we provide the first analysis of
instances where variational inference algorithms converge to the global
optimum, in the setting of topic models.
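For context, a standard identity behind this relationship (background, not a claim of the paper): the log-likelihood decomposes into the evidence lower bound (ELBO) that variational inference maximizes, plus a KL term that exact EM drives to zero,

$$\log p(x \mid \theta) = \mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z \mid \theta)}{q(z)}\right] + \mathrm{KL}\!\left(q(z) \,\big\|\, p(z \mid x, \theta)\right).$$

Exact EM sets $q(z) = p(z \mid x, \theta)$, which is intractable for topic models; variational inference instead restricts $q$ to a tractable (e.g. mean-field) family.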
More specifically, we show that variational inference provably learns the
optimal parameters of a topic model under natural assumptions on the topic-word
matrix and the topic priors. The properties that the topic-word matrix must
satisfy in our setting are related to the topic expansion assumption introduced
in (Anandkumar et al., 2013), as well as the anchor words assumption in (Arora
et al., 2012c). The assumptions on the topic priors are related to the
well-known Dirichlet prior, introduced to the area of topic modeling by (Blei
et al., 2003).
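Concretely, in the generative model of (Blei et al., 2003), each document draws topic proportions from a Dirichlet prior and each word from the topic-word matrix $\beta \in \mathbb{R}^{K \times V}$:

$$\theta \sim \mathrm{Dir}(\alpha), \qquad z_n \mid \theta \sim \mathrm{Mult}(\theta), \qquad w_n \mid z_n \sim \mathrm{Mult}(\beta_{z_n}).$$

An anchor word in the sense of (Arora et al., 2012c) is a word $w$ with $\beta_{k,w} > 0$ for exactly one topic $k$, so its occurrence unambiguously signals that topic.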
It is well known that initialization plays a crucial role in how well
variational-inference-based algorithms perform in practice. The
initializations that we use are fairly natural. One of them is similar to what
is currently used in LDA-c, the most popular implementation of variational
inference for topic models. The other is a simple and efficient overlapping
clustering algorithm, inspired by the work of (Arora et al., 2014) on
dictionary learning.
While our primary goal is to provide insights into when variational inference
might work in practice, the multiplicative, rather than additive, nature of
the variational inference updates forces us to use fairly non-standard proof
arguments, which we believe will be of general interest.
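To illustrate the multiplicative nature of these updates, here is a minimal sketch of the standard mean-field update for LDA (the textbook scheme of Blei et al., 2003, shown for intuition; not necessarily the exact variant analyzed in the paper):

import numpy as np
from scipy.special import digamma

def mean_field_step(doc_words, beta, alpha, gamma):
    """One variational update for a single document.

    doc_words: length-N array of word ids; beta: K x V topic-word matrix;
    alpha: scalar Dirichlet parameter; gamma: current K-dim variational
    Dirichlet parameters for this document.
    """
    # Multiplicative update: phi[n, k] is proportional to
    # beta[k, w_n] * exp(digamma(gamma[k])), a product of factors rather
    # than a sum, which is what complicates additive-error analyses.
    phi = beta[:, doc_words].T * np.exp(digamma(gamma))  # shape (N, K)
    phi /= phi.sum(axis=1, keepdims=True)                # normalize per word
    gamma_new = alpha + phi.sum(axis=0)                  # refit the Dirichlet
    return phi, gamma_new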
Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Markov Chains
Score matching is an approach to learning probability distributions
parametrized up to a constant of proportionality (e.g. Energy-Based Models).
The idea is to fit the score of the distribution, rather than the likelihood,
thus avoiding the need to evaluate the constant of proportionality. While
there is a clear algorithmic benefit, the statistical "cost" can be steep:
recent work by Koehler et al. 2022 showed that for distributions that have poor
isoperimetric properties (a large Poincar\'e or log-Sobolev constant), score
matching is substantially statistically less efficient than maximum likelihood.
However, many natural, realistic distributions (e.g. multimodal distributions
as simple as a mixture of two Gaussians in one dimension) have a poor
Poincar\'e constant.
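For reference, the basic score matching objective (Hyv\"arinen, 2005) that these statistical comparisons concern is

$$J(\theta) = \frac{1}{2}\,\mathbb{E}_{x \sim p}\!\left[\left\|\nabla_x \log p_\theta(x) - \nabla_x \log p(x)\right\|^2\right],$$

which integration by parts turns into $\mathbb{E}_{x \sim p}\!\left[\frac{1}{2}\|\nabla_x \log p_\theta(x)\|^2 + \Delta_x \log p_\theta(x)\right]$ plus a constant independent of $\theta$, so it can be estimated from samples without the normalizing constant of $p_\theta$.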
In this paper, we show a close connection between the mixing time of an
arbitrary Markov process with generator $\mathcal{L}$ and an appropriately
chosen generalized score matching loss that tries to fit $\frac{\mathcal{O} p}{p}$. If $\mathcal{L}$ corresponds to a Markov process corresponding to a
continuous version of simulated tempering, we show the corresponding
generalized score matching loss is a Gaussian-convolution annealed score
matching loss, akin to the one proposed in Song and Ermon 2019. Moreover, we
show that if the distribution being learned is a finite mixture of Gaussians
in $d$ dimensions with a shared covariance, the sample complexity of annealed
score matching is polynomial in the ambient dimension, the diameter of the means,
and the smallest and largest eigenvalues of the covariance -- obviating the
Poincar\'e constant-based lower bounds of the basic score matching loss shown
in Koehler et al. 2022. This is the first result characterizing the benefits of
annealing for score matching -- a crucial component in more sophisticated
score-based approaches like Song and Ermon 2019.
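As a rough illustration of a Gaussian-convolution annealed (denoising) score matching loss in the style of Song and Ermon 2019 (a sketch under assumed choices of network interface, noise ladder, and the usual $\sigma^2$ weighting; not the construction derived in this paper):

import torch

def annealed_dsm_loss(score_net, x, sigmas):
    """Denoising score matching averaged over a ladder of noise levels.

    score_net(x_noisy, sigma) is assumed to estimate the score of the
    sigma-smoothed data distribution; sigmas is a 1-D tensor of levels.
    """
    # Draw one noise level per example (the annealing ladder).
    idx = torch.randint(len(sigmas), (x.shape[0],))
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    # Conditioned on x, the score of the Gaussian-smoothed density at
    # x_noisy is -eps / sigma; the sigma^2 weighting makes levels comparable.
    residual = sigma * score_net(x_noisy, sigma) + eps
    return (residual ** 2).flatten(1).sum(dim=1).mean()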